Search results for "Rejection sampling"
Showing 10 of 14 documents
Group Metropolis Sampling
2017
Monte Carlo (MC) methods are widely used for Bayesian inference and optimization in statistics, signal processing and machine learning. Two well-known classes of MC methods are the Importance Sampling (IS) techniques and the Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce the Group Importance Sampling (GIS) framework, where different sets of weighted samples are properly summarized with one summary particle and one summary weight. GIS facilitates the design of novel efficient MC techniques. For instance, we present the Group Metropolis Sampling (GMS) algorithm, which produces a Markov chain of sets of weighted samples. GMS in general outperforms other multiple try schemes…
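The summary-particle idea can be illustrated with a minimal sketch (our own simplified reading, not the paper's algorithm): each group of weighted IS samples is compressed to one resampled particle carrying the group's average weight, and the summaries are then combined in a self-normalized estimator. The target, proposal and all names below are illustrative choices.

```python
import random, math

random.seed(1)

# Target: standard normal (unnormalized); proposal: Uniform(-5, 5).
def log_target(x):
    return -0.5 * x * x

def make_group(m=100):
    xs = [random.uniform(-5.0, 5.0) for _ in range(m)]
    # IS weight w = pi(x) / q(x), with q(x) = 1/10 on [-5, 5]
    ws = [math.exp(log_target(x)) * 10.0 for x in xs]
    return xs, ws

def summarize(xs, ws):
    """Compress a group to (summary particle, summary weight)."""
    w_sum = sum(ws)
    # summary particle: one sample resampled in proportion to the weights
    u, acc = random.random() * w_sum, 0.0
    for x, w in zip(xs, ws):
        acc += w
        if acc >= u:
            return x, w_sum / len(ws)  # summary weight: group average
    return xs[-1], w_sum / len(ws)

groups = [summarize(*make_group()) for _ in range(2000)]

# Self-normalized estimate of E[x^2] (should be near 1 for N(0, 1))
num = sum(w * x * x for x, w in groups)
den = sum(w for _, w in groups)
print(round(num / den, 2))
```

In expectation, resampling within a group and weighting by the group's average weight reproduces the usual self-normalized IS estimator, which is why the compression loses little for moment estimates.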
Recycling Gibbs sampling
2017
Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning and statistics. The key to the successful application of the Gibbs sampler is the ability to draw samples from the full-conditional probability density functions efficiently. In the general case this is not possible, so in order to speed up the convergence of the chain, it is necessary to generate auxiliary samples. However, such intermediate information is ultimately discarded. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency with no extra cost. Theoretical and exhaustive numerical co…
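For context, a minimal plain Gibbs sampler (not the recycling scheme itself) on a toy model where both full conditionals are tractable: a bivariate normal with correlation rho, where x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y. All parameters below are our own illustrative choices.

```python
import random

random.seed(2)

rho = 0.8
s = (1.0 - rho * rho) ** 0.5  # conditional std dev
x = y = 0.0
samples = []
for i in range(20000):
    x = random.gauss(rho * y, s)  # draw from p(x | y)
    y = random.gauss(rho * x, s)  # draw from p(y | x)
    if i >= 2000:                 # discard burn-in
        samples.append((x, y))

mean_x = sum(p[0] for p in samples) / len(samples)
# E[xy] equals the correlation rho here, since both marginals are N(0, 1)
cross = sum(p[0] * p[1] for p in samples) / len(samples)
print(round(mean_x, 2), round(cross, 2))
```

The entry's point is that when a conditional draw itself requires auxiliary samples (e.g. an inner rejection step), those by-products can be folded back into the estimators rather than thrown away.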
Parsimonious adaptive rejection sampling
2017
Monte Carlo (MC) methods have become very popular in signal processing during the past decades. The adaptive rejection sampling (ARS) algorithms are a well-known family of MC techniques which draw independent samples efficiently from univariate target densities. The ARS schemes yield a sequence of proposal functions that converge toward the target, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. We propose the Parsimonious Adaptive Rejection Sampling (PARS) method, where an efficient trade-off between acceptance rate and proposal complexity is obtained. Thus, the resulting algorithm is f…
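The rejection-sampling principle that ARS and PARS build on can be sketched in its plain, non-adaptive form (the target and envelope below are our own toy choices): draw x from a proposal q, and accept it with probability p(x) / (M q(x)), where M bounds p/q.

```python
import random

random.seed(3)

# Target: triangular density p(x) = 2x on [0, 1].
# Proposal: q = Uniform(0, 1), with envelope constant M = 2 since p(x) <= 2.
def rejection_sample():
    while True:
        x = random.random()                      # x ~ q
        if random.random() <= (2.0 * x) / 2.0:   # accept w.p. p(x) / (M q(x))
            return x

xs = [rejection_sample() for _ in range(50000)]
mean_est = sum(xs) / len(xs)
print(round(mean_est, 2))  # E[x] = 2/3 under p(x) = 2x
```

Adaptive variants replace the fixed envelope with one that is tightened after each rejection; the entry's trade-off concerns how expensive that ever-growing envelope becomes to sample from.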
Grapham: Graphical models with adaptive random walk Metropolis algorithms
2008
Recently developed adaptive Markov chain Monte Carlo (MCMC) methods have been applied successfully to many problems in Bayesian statistics. Grapham is a new open source implementation covering several such methods, with emphasis on graphical models for directed acyclic graphs. The implemented algorithms include the seminal Adaptive Metropolis algorithm adjusting the proposal covariance according to the history of the chain and a Metropolis algorithm adjusting the proposal scale based on the observed acceptance probability. Different variants of the algorithms allow one, for example, to use these two algorithms together, employ delayed rejection and adjust several parameters of the algorithm…
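The second algorithm mentioned, scale adaptation driven by the observed acceptance probability, can be sketched as follows (a generic illustration in our own notation, not Grapham's implementation): after each batch, the log proposal scale is nudged toward a target acceptance rate, with a diminishing step size so the adaptation fades out.

```python
import random, math

random.seed(4)

def log_target(x):
    return -0.5 * x * x  # standard normal target (illustrative)

x, log_scale, target_acc = 0.0, 0.0, 0.44
for batch in range(200):
    accepted = 0
    for _ in range(100):
        prop = x + random.gauss(0.0, math.exp(log_scale))
        # random-walk Metropolis accept/reject step
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x, accepted = prop, accepted + 1
    # diminishing adaptation: the nudge shrinks with the batch index
    delta = min(0.1, 1.0 / math.sqrt(batch + 1))
    log_scale += delta if accepted / 100.0 > target_acc else -delta

scale = math.exp(log_scale)
print(round(scale, 2))  # ~2.4 is the known optimum for a 1D normal target
```

The companion Adaptive Metropolis algorithm instead adapts the full proposal covariance from the chain history; the diminishing step size above is what keeps such schemes ergodic.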
Exact simulation of first exit times for one-dimensional diffusion processes
2019
The simulation of exit times for diffusion processes is a challenging task, since it concerns many applications in different fields like mathematical finance, neuroscience, reliability… The usual procedure is to use discretization schemes, which unfortunately introduce some error in the target distribution. Our aim is to present a new algorithm which simulates exactly the exit time for one-dimensional diffusions. This acceptance-rejection algorithm requires, on the one hand, the exact simulation of the exit time of Brownian motion and, on the other hand, of the Brownian position at a given time, conditioned not to have exited before. Crucial tools in this study …
Avoiding Boundary Effects in Wang-Landau Sampling
2003
A simple modification of the "Wang-Landau sampling" algorithm removes the systematic error that occurs at the boundary of the range of energy over which the random walk takes place in the original algorithm.
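The underlying Wang-Landau scheme can be sketched on a toy model where the density of states is known (two dice, energy E = d1 + d2; our own illustrative setup): a random walk in state space is accepted with probability min(1, g(E_old)/g(E_new)), the running estimate g is multiplied by a modification factor f at every visit, and f is reduced over stages. For brevity this sketch uses fixed-length stages instead of the usual histogram-flatness check.

```python
import math, random

random.seed(0)

# True density of states for E = 2..12 is (1,2,3,4,5,6,5,4,3,2,1)/36,
# so the estimate can be sanity-checked (E = 7 has the most microstates).
energies = list(range(2, 13))
log_g = {e: 0.0 for e in energies}  # log density-of-states estimate
state = [1, 1]
log_f = 1.0                         # modification factor (log scale)

while log_f > 1e-4:
    for _ in range(20000):
        i = random.randrange(2)
        old = state[i]
        e_old = state[0] + state[1]
        state[i] = random.randint(1, 6)          # propose re-rolling one die
        e_new = state[0] + state[1]
        # accept with min(1, g(E_old)/g(E_new)) to flatten the energy histogram
        if math.log(random.random()) >= log_g[e_old] - log_g[e_new]:
            state[i] = old                        # reject: restore old state
            e_new = e_old
        log_g[e_new] += log_f                     # update visited level
    log_f /= 2.0                                  # standard schedule: f -> sqrt(f)

best = max(log_g, key=log_g.get)
print(best)
```

The boundary error the entry addresses arises at the edges of the allowed energy window, where proposed moves leaving the window must be handled carefully.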
A new strategy for effective learning in population Monte Carlo sampling
2016
In this work, we focus on advancing the theory and practice of a class of Monte Carlo methods, population Monte Carlo (PMC) sampling, for dealing with inference problems with static parameters. We devise a new method for efficient adaptive learning from past samples and weights to construct improved proposal functions. It is based on assuming that, at each iteration, there is an intermediate target, and that this target gradually gets closer to the true one. Computer simulations confirm the improvement of the proposed strategy over the traditional PMC method in a simple test scenario.
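The traditional PMC loop that serves as the baseline here can be sketched as follows (a toy of our own, not the paper's method): at each iteration, samples are drawn from proposals centered at the current population, weighted by target over proposal, and resampled to place the next generation of proposal means.

```python
import random, math

random.seed(5)

def log_target(x):
    return -0.5 * (x - 3.0) ** 2  # target: N(3, 1), unnormalized

means = [random.uniform(-10.0, 10.0) for _ in range(50)]
sigma = 1.0
for _ in range(20):
    xs = [random.gauss(m, sigma) for m in means]
    # weight: target density over the (per-sample) Gaussian proposal density
    ws = [math.exp(log_target(x) + 0.5 * ((x - m) / sigma) ** 2)
          for x, m in zip(xs, means)]
    # multinomial resampling picks the next generation of proposal means
    means = random.choices(xs, weights=ws, k=50)

pop_mean = sum(means) / len(means)
print(round(pop_mean, 1))  # population migrates toward the target mean, 3
```

The entry's contribution is to replace the raw target with a sequence of intermediate targets so that the learned proposals approach the true one more gradually.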
Anti-tempered Layered Adaptive Importance Sampling
2017
Monte Carlo (MC) methods are widely used for Bayesian inference in signal processing, machine learning and statistics. In this work, we introduce an adaptive importance sampler which mixes together the benefits of the Importance Sampling (IS) and Markov Chain Monte Carlo (MCMC) approaches. Different parallel MCMC chains provide the location parameters of the proposal probability density functions (pdfs) used in an IS method. The MCMC algorithms consider a tempered version of the posterior distribution as invariant density. We also provide exhaustive theoretical support explaining why, in the presented technique, even an anti-tempering strategy (reducing the scaling of the posterior) can …
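The layered construction can be sketched with a single chain and a simple per-sample weight (our simplification; the paper's scheme uses multiple chains and mixture-style weighting): a Metropolis chain targets a tempered posterior pi(x)^beta, and its states become the means of the Gaussian proposals in a lower importance-sampling layer. Target, beta and sigma below are illustrative.

```python
import random, math

random.seed(6)

def log_target(x):
    return -0.5 * (x - 2.0) ** 2  # posterior: N(2, 1), unnormalized

# Upper layer: Metropolis chain on the tempered target pi(x)^beta
beta, x, chain = 0.5, 0.0, []
for _ in range(3000):
    prop = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < beta * (log_target(prop) - log_target(x)):
        x = prop
    chain.append(x)

# Lower layer: IS with Gaussian proposals centered at the chain states
sigma = 1.0
num = den = 0.0
for m in chain[500:]:
    z = random.gauss(m, sigma)
    # IS weight: target over the proposal centered at this chain state
    w = math.exp(log_target(z) + 0.5 * ((z - m) / sigma) ** 2)
    num += w * z
    den += w

post_mean = num / den
print(round(post_mean, 1))  # self-normalized estimate of the posterior mean, 2
```

Tempering with beta < 1 widens the invariant density, so the proposal locations cover the posterior's tails; the entry's surprising claim is that even the opposite (anti-tempering) can pay off.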
Monte-Carlo Methods
2003
The article contains sections titled: 1 Introduction and Overview 2 Random-Number Generation 2.1 General Introduction 2.2 Properties That a Random-Number Generator (RNG) Should Have 2.3 Comments about a Few Frequently Used Generators 3 Simple Sampling of Probability Distributions Using Random Numbers 3.1 Numerical Estimation of Known Probability Distributions 3.2 “Importance Sampling” versus “Simple Sampling” 3.3 Monte-Carlo as a Method of Integration 3.4 Infinite Integration Space 3.5 Random Selection of Lattice Sites 3.6 The Self-Avoiding Walk Problem 3.7 Simple Sampling versus Biased Sampling: the Example of SAWs Continued 4 Survey of Applications to Simulation of Transport Processes 4.…
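The "Importance Sampling versus Simple Sampling" contrast and Monte Carlo integration covered in Sections 3.2-3.3 can be illustrated with a small example of our own: estimate I = ∫₀¹ eˣ dx = e − 1 both by simple uniform sampling and by sampling from a proposal that roughly tracks the integrand.

```python
import random, math

random.seed(7)
n = 50000

# Simple sampling: x ~ Uniform(0, 1), average f(x) = e^x
simple = sum(math.exp(random.random()) for _ in range(n)) / n

# Importance sampling: q(x) = (1 + x)/1.5 roughly tracks e^x on [0, 1];
# draw from q via inverse CDF, then average f(x) / q(x)
def draw_q():
    u = random.random()
    return math.sqrt(1.0 + 3.0 * u) - 1.0  # inverts F(x) = (x + x^2/2)/1.5

importance = sum(math.exp(x) * 1.5 / (1.0 + x)
                 for x in (draw_q() for _ in range(n))) / n

print(round(simple, 2), round(importance, 2))  # both ≈ e - 1 ≈ 1.72
```

Both estimators are unbiased; the importance-sampled one has lower variance because f/q is nearly constant, which is the essence of the Section 3.2 comparison.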
Adaptive Metropolis algorithm using variational Bayesian adaptive Kalman filter
2013
Markov chain Monte Carlo (MCMC) methods are powerful computational tools for analysis of complex statistical problems. However, their computational efficiency is highly dependent on the chosen proposal distribution, which is generally difficult to find. One way to solve this problem is to use adaptive MCMC algorithms which automatically tune the statistics of a proposal distribution during the MCMC run. A new adaptive MCMC algorithm, called the variational Bayesian adaptive Metropolis (VBAM) algorithm, is developed. The VBAM algorithm updates the proposal covariance matrix using the variational Bayesian adaptive Kalman filter (VB-AKF). A strong law of large numbers for the VBAM algorithm is…